

Search for: All records

Creators/Authors contains: "Mankovski, Serge"


  1. Trust in data collected by and passing through Internet of Things (IoT) networks is paramount. The quality of decisions made from this collected data depends heavily on its accuracy. Currently, most trust assessment methodologies assume that collected data follows a stationary Gaussian distribution, and a trust score is often estimated from the deviation from this distribution. However, the underlying state of a system monitored by an IoT network can change over time, and the collected data may not consistently follow a Gaussian distribution. Further, faults that fall within the assumed Gaussian distribution may go undetected. In this study, we present a model-based trust estimation system that accommodates concept drift, i.e., distributions that change over time. The presented methodology uses data-driven models to estimate the value produced by a sensor from the values produced by the other sensors in the network. We assume that an untrustworthy piece of data falls in the tails of the residual distribution, and we use this assumption to assign a trust score. The method is evaluated on a smart home data set consisting of temperature, humidity, and energy sensors.
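The residual-tail idea in the abstract above can be sketched in a few lines. This is an illustrative assumption of how such a score might be computed, not the paper's implementation: residuals between predicted and observed sensor values are ranked against a sliding-window empirical distribution (the window is what lets the reference distribution drift), and readings whose residuals land in the tails receive low trust.

```python
import numpy as np

def trust_scores(predicted, observed, window=200):
    """Assign a trust score in [0, 1] to each reading based on where its
    residual falls in a sliding-window empirical residual distribution.
    Readings whose residuals land in the tails receive low trust.
    All names and the score definition are illustrative assumptions."""
    residuals = observed - predicted
    scores = np.ones_like(residuals, dtype=float)
    for t in range(len(residuals)):
        # Use only the recent window so the reference distribution can drift.
        lo = max(0, t - window)
        recent = residuals[lo:t + 1]
        # Empirical two-sided tail probability of the current residual:
        # the fraction of recent residuals at least as extreme as this one.
        scores[t] = np.mean(np.abs(recent) >= abs(residuals[t]))
    return scores  # near 0 => extreme residual => low trust
```

In this sketch `predicted` would come from a data-driven model fit on the other sensors in the network, as the abstract describes; any such regression model could supply it.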
  2. Preserving privacy in machine learning on multi-party data is of importance to many domains. In practice, existing solutions suffer from several critical limitations, such as significantly reduced utility under privacy constraints or excessive communication burden between the information fusion center and local data providers. In this paper, we propose and implement a new distributed deep learning framework that addresses these shortcomings and preserves privacy more efficiently than previous methods. During the stochastic gradient descent training of a deep neural network, we focus on the parameters with large absolute gradients in order to save privacy budget consumption. We adopt a generalization of the Report-Noisy-Max algorithm in differential privacy to select these gradients and prove its privacy guarantee rigorously. Inspired by the recent novel idea of Terngrad, we also quantize the released gradients to ternary levels {-B, 0, B}, where B is the bound of gradient clipping. Applying Terngrad can significantly reduce the communication cost without incurring severe accuracy loss. Furthermore, we evaluate the performance of our method on a real-world credit card fraud detection data set consisting of millions of transactions. 
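The two mechanisms named in the abstract above can be sketched together: a Report-Noisy-Max-style selection of the coordinates with the largest noisy absolute gradients, followed by a TernGrad-style release quantized to {-B, 0, B}. The function name, the Laplace noise, and the exact selection rule are illustrative assumptions, not the paper's mechanism or privacy accounting.

```python
import numpy as np

def private_ternary_gradients(grad, k, clip_bound, noise_scale, rng):
    """Sketch (under stated assumptions) of selecting large-magnitude
    gradient coordinates via noisy max, then releasing them at ternary
    levels {-B, 0, B}, where B is the gradient clipping bound."""
    g = np.clip(grad, -clip_bound, clip_bound)
    # Noisy-max-style selection: add noise to the magnitudes, keep the top k.
    noisy_mag = np.abs(g) + rng.laplace(0.0, noise_scale, size=g.shape)
    top_k = np.argsort(noisy_mag)[-k:]
    # Ternary release: +/-B at the selected coordinates, 0 elsewhere.
    released = np.zeros_like(g)
    released[top_k] = clip_bound * np.sign(g[top_k])
    return released
```

Releasing only k ternary values per step is what cuts the communication cost: each selected coordinate needs an index plus one of three levels, instead of a full-precision float per parameter.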